We study the task of node classification for graph neural networks (GNNs) and establish a connection between group fairness, as measured by statistical parity and equal opportunity, and local assortativity, i.e., the tendency of linked nodes to have similar attributes. Such assortativity is often induced by homophily, the tendency for nodes of similar properties to connect. Homophily can be common in social networks where systemic factors have forced individuals into communities which share a sensitive attribute. Through synthetic graphs, we study the interplay between locally occurring homophily and fair predictions, finding that not all node neighborhoods are equal in this regard: neighborhoods dominated by one category of a sensitive attribute often struggle to obtain fair treatment, especially in the case of diverging local class and sensitive-attribute homophily. Having established that a relationship between local homophily and fairness exists, we investigate whether the issue of unfairness can be associated with the design of the applied GNN model. We show that by adopting heterophilous GNN designs capable of handling disassortative group labels, group fairness in locally heterophilous neighborhoods can be improved by up to 25% over homophilous designs on real and synthetic datasets.
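The two group-fairness measures named above have standard definitions; as a point of reference (not code from the paper), a minimal NumPy sketch of the statistical parity and equal opportunity gaps for a binary sensitive attribute looks as follows.

```python
import numpy as np

def statistical_parity_gap(y_pred, sens):
    """|P(y_hat=1 | s=0) - P(y_hat=1 | s=1)| for a binary sensitive attribute."""
    y_pred, sens = np.asarray(y_pred), np.asarray(sens)
    return abs(y_pred[sens == 0].mean() - y_pred[sens == 1].mean())

def equal_opportunity_gap(y_pred, y_true, sens):
    """|P(y_hat=1 | y=1, s=0) - P(y_hat=1 | y=1, s=1)|, i.e., the true-positive-rate gap."""
    y_pred, y_true, sens = map(np.asarray, (y_pred, y_true, sens))
    tpr = lambda g: y_pred[(y_true == 1) & (sens == g)].mean()
    return abs(tpr(0) - tpr(1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sens = rng.integers(0, 2, size=1000)                          # sensitive attribute per node
    y_true = rng.integers(0, 2, size=1000)                        # class labels
    y_pred = (rng.random(1000) < 0.5 + 0.1 * sens).astype(int)    # deliberately biased predictor
    print(statistical_parity_gap(y_pred, sens))
    print(equal_opportunity_gap(y_pred, y_true, sens))
```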
Graph neural networks (GNNs) have been shown to excel in predictive modeling tasks where the underlying data are graphs. However, as GNNs are extensively used in human-centric applications, the issue of fairness has arisen. While edge deletion is a common method used to promote fairness in GNNs, it fails to account for cases where the data are inherently missing fair connections. In this work, we consider the unexplored method of edge addition to promote fairness. We propose two model-agnostic algorithms to perform edge editing: a brute-force approach and a continuous approximation approach, FairEdit. FairEdit performs efficient edge editing by leveraging gradient information of a fairness loss to find edges that improve fairness. We find that FairEdit outperforms standard training on many datasets and GNN methods, while performing comparably to many state-of-the-art methods, demonstrating FairEdit's ability to improve fairness across many domains and models.
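The gradient-based edge selection that FairEdit is described as performing can be sketched in a few lines. The PyTorch toy below illustrates the general idea under assumed components (TinyGCN and the demographic-parity loss are stand-ins, not the authors' implementation): it differentiates a fairness loss with respect to a dense adjacency matrix and proposes the non-edge whose addition is predicted to reduce the loss the most.

```python
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    """Minimal dense-adjacency GCN, included only to make the sketch self-contained."""
    def __init__(self, in_dim, hid, n_cls):
        super().__init__()
        self.w1, self.w2 = nn.Linear(in_dim, hid), nn.Linear(hid, n_cls)
    def forward(self, x, adj):
        a_hat = adj + torch.eye(adj.size(0))              # add self-loops
        a_hat = a_hat / a_hat.sum(dim=1, keepdim=True)    # row-normalize
        return self.w2(torch.relu(a_hat @ self.w1(a_hat @ x)))

def demographic_parity_loss(logits, sens):
    """Squared gap between the mean positive-class probabilities of the two groups."""
    p = torch.softmax(logits, dim=1)[:, 1]
    return (p[sens == 0].mean() - p[sens == 1].mean()) ** 2

def propose_fair_edge(model, x, adj, sens):
    """Score candidate edges by the gradient of the fairness loss w.r.t. the
    adjacency matrix and return the non-edge whose addition should help most."""
    adj = adj.clone().requires_grad_(True)
    loss = demographic_parity_loss(model(x, adj), sens)
    grad = torch.autograd.grad(loss, adj)[0]
    score = grad + grad.t()                               # undirected edge (i, j)
    forbidden = adj.detach().bool() | torch.eye(adj.size(0), dtype=torch.bool)
    score = score.masked_fill(forbidden, float("inf"))    # only consider new edges
    i, j = divmod(torch.argmin(score).item(), adj.size(0))
    return i, j

n, d = 30, 8
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.1).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)
sens = torch.randint(0, 2, (n,))
print(propose_fair_edge(TinyGCN(d, 16, 2), x, adj, sens))
```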
We bridge two research directions on graph neural networks (GNNs) by formalizing the relation between heterophily of node labels (i.e., the tendency of connected nodes to have dissimilar labels) and the robustness of GNNs to adversarial attacks. Our theoretical and empirical analyses show that for homophilous graph data, influential structural attacks always lead to reduced homophily, while for heterophilous graph data the change in the homophily level depends on the node degrees. These insights have practical implications for defending against attacks on real-world graphs: we deduce that separate aggregators for ego- and neighbor-embeddings, a design principle identified to significantly improve prediction on heterophilous graph data, can also offer increased robustness to GNNs. Our comprehensive experiments show that GNNs merely adopting this design achieve improved empirical and certifiable robustness compared to the best-performing unvaccinated model. Additionally, combining this design with explicit defense mechanisms against adversarial attacks improves robustness further, with up to an 18.33% performance increase under attack compared to the best-performing vaccinated model.
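The design principle referred to, keeping the ego (self) embedding separate from the aggregated neighbor embedding rather than mixing them, can be written as a single layer. A minimal PyTorch sketch assuming a dense adjacency matrix (illustrative only, not the paper's implementation):

```python
import torch
import torch.nn as nn

class EgoNeighborSepLayer(nn.Module):
    """GNN layer that transforms the ego embedding and the mean-aggregated
    neighbor embedding with separate weights and concatenates them,
    instead of averaging self and neighbors together."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_ego = nn.Linear(in_dim, out_dim)
        self.w_nbr = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        nbr = (adj @ x) / deg                              # mean over neighbors, no self-loop
        return torch.cat([self.w_ego(x), self.w_nbr(nbr)], dim=1)

x = torch.randn(5, 4)
adj = (torch.rand(5, 5) < 0.4).float().fill_diagonal_(0)
print(EgoNeighborSepLayer(4, 8)(x, adj).shape)             # torch.Size([5, 16])
```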
The pandemic of these very recent years has led to a dramatic increase in people wearing protective masks in public venues. This poses obvious challenges to the pervasive use of face recognition technology that now is suffering a decline in performance. One way to address the problem is to revert to face recovery methods as a preprocessing step. Current approaches to face reconstruction and manipulation leverage the ability to model the face manifold, but tend to be generic. We introduce a method that is specific for the recovery of the face image from an image of the same individual wearing a mask. We do so by designing a specialized GAN inversion method, based on an appropriate set of losses for learning an unmasking encoder. With extensive experiments, we show that the approach is effective at unmasking face images. In addition, we also show that the identity information is preserved sufficiently well to improve face verification performance based on several face recognition benchmark datasets.
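The unmasking encoder is trained against a set of losses; the specific losses are defined in the paper, but a hypothetical combination of a pixel reconstruction term (restricted to the unmasked region) and an identity-preservation term can be sketched as follows, with simple stand-in modules so the example runs end to end.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def unmasking_losses(encoder, generator, id_net, masked_img, clean_img, mask,
                     w_pix=1.0, w_id=0.1):
    """Hypothetical loss for an unmasking encoder: pixel reconstruction outside the
    masked region plus an identity term that keeps the recovered face close to the
    wearer's identity embedding. The generator and identity network stay frozen."""
    w = encoder(masked_img)                  # latent code(s) for the frozen generator
    recon = generator(w)                     # candidate unmasked face
    pix = F.l1_loss(recon * (1 - mask), clean_img * (1 - mask))
    ident = 1 - F.cosine_similarity(id_net(recon), id_net(clean_img), dim=1).mean()
    return w_pix * pix + w_id * ident

# Stand-in modules, only so the sketch runs end to end.
enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
gen = nn.Sequential(nn.Linear(128, 3 * 64 * 64), nn.Unflatten(1, (3, 64, 64)))
idn = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 64))

masked = torch.rand(2, 3, 64, 64)
clean = torch.rand(2, 3, 64, 64)
mask = torch.zeros(2, 1, 64, 64)
mask[:, :, 32:, :] = 1                       # lower half of the face is "masked"
print(unmasking_losses(enc, gen, idn, masked, clean, mask))
```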
Differentiable Search Indices (DSIs) encode a corpus of documents in the parameters of a model and use the same model to map queries directly to relevant document identifiers. Despite the strong performance of DSI models, deploying them in situations where the corpus changes over time is computationally expensive because reindexing the corpus requires re-training the model. In this work, we introduce DSI++, a continual learning challenge for DSI to incrementally index new documents while being able to answer queries related to both previously and newly indexed documents. Across different model scales and document identifier representations, we show that continual indexing of new documents leads to considerable forgetting of previously indexed documents. We also hypothesize and verify that the model experiences forgetting events during training, leading to unstable learning. To mitigate these issues, we investigate two approaches. The first focuses on modifying the training dynamics. Flatter minima implicitly alleviate forgetting, so we optimize for flatter loss basins and show that the model stably memorizes more documents (+12\%). Next, we introduce a generative memory to sample pseudo-queries for documents and supplement them during continual indexing to prevent forgetting for the retrieval task. Extensive experiments on novel continual indexing benchmarks based on Natural Questions (NQ) and MS MARCO demonstrate that our proposed solution mitigates forgetting by a significant margin. Concretely, it improves the average Hits@10 by $+21.1\%$ over competitive baselines for NQ and requires $6$ times fewer model updates compared to re-training the DSI model for incrementally indexing five corpora in a sequence.
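The "flatter loss basins" ingredient is in the spirit of sharpness-aware minimization; the following generic SAM-style training step in PyTorch illustrates that technique (it is not claimed to be the authors' exact recipe).

```python
import torch
import torch.nn.functional as F

def sam_step(model, loss_fn, batch, optimizer, rho=0.05):
    """One sharpness-aware update: ascend to a worst-case point within an L2 ball
    of radius rho around the current weights, then descend using the gradient
    evaluated at that perturbed point."""
    params = [p for p in model.parameters() if p.requires_grad]
    # 1) gradient at the current weights
    loss_fn(model, batch).backward()
    grads = [p.grad.detach().clone() if p.grad is not None else torch.zeros_like(p)
             for p in params]
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    # 2) ascend within the L2 ball
    eps = [rho * g / norm for g in grads]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)
    optimizer.zero_grad()
    # 3) gradient at the perturbed weights, restore, and take the real step
    loss_fn(model, batch).backward()
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()

model = torch.nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
xb, yb = torch.randn(8, 4), torch.randint(0, 2, (8,))
loss_fn = lambda m, b: F.cross_entropy(m(b[0]), b[1])
sam_step(model, loss_fn, (xb, yb), opt)
```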
Large language models (LLMs) have shown impressive results across a variety of tasks while requiring little or no direct supervision. Further, there is mounting evidence that LLMs may have potential in information-seeking scenarios. We believe the ability of an LLM to attribute the text that it generates is likely to be crucial for both system developers and users in this setting. We propose and study Attributed QA as a key first step in the development of attributed LLMs. We develop a reproducible evaluation framework for the task, using human annotations as a gold standard and a correlated automatic metric that we show is suitable for development settings. We describe and benchmark a broad set of architectures for the task. Our contributions give some concrete answers to two key questions (How to measure attribution?, and How well do current state-of-the-art methods perform on attribution?), and give some hints as to how to address a third key question (How to build LLMs with attribution?).
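As a small illustration of validating an automatic metric against human annotations (the general procedure the abstract refers to, not the specific metric studied in the paper), one can measure how well the metric's scores track the human gold standard:

```python
import numpy as np

def metric_human_agreement(auto_scores, human_labels, threshold=0.5):
    """Compare an automatic attribution score with binary human judgments:
    Pearson correlation, plus accuracy after thresholding the automatic score."""
    auto = np.asarray(auto_scores, dtype=float)
    human = np.asarray(human_labels, dtype=float)
    corr = np.corrcoef(auto, human)[0, 1]
    acc = ((auto >= threshold).astype(float) == human).mean()
    return corr, acc

# Toy example: six (question, answer, citation) triples scored by the metric and by annotators.
print(metric_human_agreement([0.9, 0.2, 0.7, 0.1, 0.8, 0.4], [1, 0, 1, 0, 1, 1]))
```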
A clustering termination procedure that is locally adaptive (with respect to the hierarchical tree of sets produced by the agglomerative merging) is proposed for agglomerative hierarchical clustering on a set equipped with a distance function. It represents a multi-scale alternative to conventional scale-dependent, threshold-based termination criteria.
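The paper's specific criterion is not reproduced here, but SciPy's inconsistency-based cut is a readily available example of a locally adaptive termination rule, in contrast to a single global distance threshold:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two Gaussian blobs at very different spatial scales.
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0, 0.05, (50, 2)),      # tight cluster
                    rng.normal(5, 1.0, (50, 2))])      # loose cluster

Z = linkage(points, method="average")

# Global threshold: a single distance cutoff applied everywhere in the tree.
global_cut = fcluster(Z, t=0.5, criterion="distance")

# Locally adaptive: each merge is judged against the statistics of the merges
# beneath it (SciPy's inconsistency criterion), so tight and loose regions of
# the tree can terminate at different scales.
local_cut = fcluster(Z, t=1.15, criterion="inconsistent", depth=2)

print("global threshold -> %d clusters" % len(np.unique(global_cut)))
print("locally adaptive -> %d clusters" % len(np.unique(local_cut)))
```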
Instance-level image retrieval (IIR), or simply instance retrieval, deals with the problem of finding all the images within a dataset that contain a query instance (e.g., an object). This paper makes the first attempt to address this problem using instance-discrimination-based contrastive learning (CL). While CL has shown impressive performance for many computer vision tasks, similar success has never been found in the field of IIR. In this work, we tackle this problem by exploring the capability of deriving discriminative representations from pre-trained and fine-tuned CL models. To begin with, we investigate the efficacy of transfer learning for IIR by comparing off-the-shelf features learned by a pre-trained deep neural network (DNN) classifier with those learned by a CL model. These findings inspired us to propose a new training strategy that optimizes CL towards learning IIR-oriented features, using an average precision (AP) loss together with a fine-tuning method to learn contrastive feature representations tailored to IIR. Our empirical evaluation demonstrates significant performance improvements over off-the-shelf features learned from a pre-trained DNN classifier on the challenging Oxford and Paris datasets.
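The AP-oriented fine-tuning mentioned above can be illustrated with a sigmoid-relaxed ("smooth") AP surrogate; the PyTorch sketch below is a generic formulation of such a loss, not necessarily the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def smooth_ap_loss(emb, labels, tau=0.01):
    """Differentiable approximation of 1 - mean Average Precision over a batch:
    for each anchor, images with the same label form the relevant set and are
    ranked by cosine similarity; hard rank indicators are relaxed with sigmoids."""
    emb = F.normalize(emb, dim=1)
    sim = emb @ emb.t()                                    # pairwise cosine similarities
    n = emb.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=emb.device)
    pos = (labels[:, None] == labels[None, :]) & ~eye      # relevant pairs (no self)
    ap_terms = []
    for i in range(n):
        p = pos[i].nonzero(as_tuple=True)[0]
        if p.numel() == 0:                                 # anchor without positives
            continue
        s_p = sim[i, p]                                    # similarities of positives
        s_all = sim[i, ~eye[i]]                            # all candidates except the anchor
        # soft rank of each positive among the positives and among all candidates;
        # subtract sigmoid(0) = 0.5 to remove the self-comparison term
        rank_pos = 1 + torch.sigmoid((s_p[None, :] - s_p[:, None]) / tau).sum(1) - 0.5
        rank_all = 1 + torch.sigmoid((s_all[None, :] - s_p[:, None]) / tau).sum(1) - 0.5
        ap_terms.append((rank_pos / rank_all).mean())
    return 1 - torch.stack(ap_terms).mean()

emb = torch.randn(16, 32, requires_grad=True)
labels = torch.randint(0, 4, (16,))
loss = smooth_ap_loss(emb, labels)
loss.backward()
print(loss.item())
```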
Robots operating at night using conventional vision cameras face significant challenges in reconstruction due to noise-limited images. Previous work has demonstrated that burst-imaging techniques can be used to partially overcome this issue. In this paper, we develop a novel feature detector that operates directly on image bursts, enhancing vision-based reconstruction under extremely low-light conditions. Our approach finds keypoints with well-defined scale and apparent motion within each burst by jointly searching a multi-scale and multi-motion space. Because we describe these features at a stage where the images have a higher signal-to-noise ratio, the detected features are more accurate than state-of-the-art features detected on conventional noisy images and burst-merged images, and exhibit higher matching performance. We show improved feature and camera pose estimation performance, and demonstrate improved structure-from-motion results using our feature detector in challenging light-constrained scenes. Our feature finder provides a significant step towards robots operating in low-light scenarios and applications including night-time operations.
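As a toy illustration of jointly searching over scale and apparent motion on a noisy burst (not the paper's detector), the NumPy/SciPy sketch below aligns the frames under each candidate motion, merges them to raise the SNR, and takes the per-pixel maximum difference-of-Gaussians response over scales and motions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def burst_keypoint_response(burst, scales=(1, 2, 4), shifts=range(-2, 3)):
    """Toy joint search over scale and (1-D, constant per-frame) motion: for each
    candidate motion the frames are aligned and averaged, which raises the SNR,
    then a difference-of-Gaussians response is computed at each scale. The
    per-pixel maximum over scales and motions is returned."""
    best = np.full(burst[0].shape, -np.inf)
    for dx in shifts:                                      # candidate horizontal motion (px/frame)
        aligned = [np.roll(f, i * dx, axis=1) for i, f in enumerate(burst)]
        merged = np.mean(aligned, axis=0)                  # burst merge under this motion
        for s in scales:                                   # candidate scale
            dog = gaussian_filter(merged, s) - gaussian_filter(merged, 1.6 * s)
            best = np.maximum(best, np.abs(dog))
    return best

rng = np.random.default_rng(0)
frames = [np.zeros((64, 64)) for _ in range(5)]
for i, f in enumerate(frames):                             # a bright blob drifting right
    f[30:34, 20 + i:24 + i] = 1.0
    f += rng.normal(0, 0.5, f.shape)                       # severe sensor noise
resp = burst_keypoint_response(frames)
print(np.unravel_index(resp.argmax(), resp.shape))         # location of the strongest keypoint
```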
We present StreamNet, an autoencoder architecture for the analysis of the highly heterogeneous geometry of large collections of white matter streamlines. The proposed framework takes advantage of the geometry-preserving properties of the Wasserstein-1 metric to achieve direct encoding and reconstruction of entire bundles of streamlines. We show that the model not only accurately captures the distributional structure of streamlines across the population, but is also able to achieve superior reconstruction performance on both real and synthetic streamlines. Experimental model performance is evaluated on white matter streamlines resulting from T1-weighted diffusion imaging of 40 healthy controls, using recent state-of-the-art bundle comparison metrics.
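The Wasserstein-1 reconstruction idea can be illustrated with a sliced approximation; the minimal PyTorch sketch below assumes streamlines resampled to a common number of points and is not the StreamNet implementation.

```python
import torch

def sliced_wasserstein_1(x, y, n_proj=64):
    """Monte-Carlo sliced Wasserstein-1 distance between two point clouds of equal
    size (e.g., two streamlines resampled to the same number of points). Each random
    1-D projection reduces W1 to a difference of sorted projections."""
    d = x.size(1)
    theta = torch.randn(n_proj, d)
    theta = theta / theta.norm(dim=1, keepdim=True)        # random unit directions
    px, py = x @ theta.t(), y @ theta.t()                  # (n_points, n_proj)
    return (px.sort(dim=0).values - py.sort(dim=0).values).abs().mean()

# Toy autoencoder-style usage: penalize a reconstructed streamline with W1.
streamline = torch.cumsum(torch.randn(100, 3) * 0.1, dim=0)   # a 3-D curve
recon = streamline + 0.05 * torch.randn_like(streamline)
print(sliced_wasserstein_1(streamline, recon).item())
```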